Talk by Santani Teng of MIT. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Abstract. Functional changes in visual cortex as a consequence of blindness are a major model for studying crossmodal neuroplasticity. Previous work has shown that traditionally visual cortical regions activate in response to a wide range of nonvisual tasks. However, the underlying representations in visual cortex, while often inferred to be similar for the blind and the sighted, have almost never been examined in detail. Here we used multivariate pattern analysis of magnetoencephalography (MEG) data to compare visual letter recognition with Braille reading. We presented blind and sighted volunteers with 10 single letters in random order while recording brain activity. Sighted subjects were presented with Roman visual letters, while blind subjects were presented with Braille tactile letters. We used linear support vector machines to decode letter identity from MEG data and found that the classification time course of letter recognition in sighted subjects was generally faster, briefer, and more consistent than in blind subjects. We then used representational similarity analysis (RSA; Kriegeskorte et al., 2008) to compare neural representations of the letters both within and between sighted and blind subject groups. This analysis revealed high within-group correlations at ~200 ms for blind and ~600 ms for sighted subjects. Correlations between groups were an order of magnitude lower, though overall significantly positive. The results suggest that blind and sighted letter reading may be largely driven by distinct processes, but that brain regions recruited crossmodally may be performing some common underlying computations for analogous tasks. More generally, the application of RSA to both time-resolved and space-resolved (fMRI) neural data has the potential to address longstanding unanswered questions about the dynamics of reorganized functional networks in blind or otherwise altered brains.
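The pipeline described above, decoding letter identity from MEG sensor patterns at each time point with a linear SVM and then building representational dissimilarity matrices for RSA, can be sketched roughly as follows. This is a minimal illustration on synthetic data: the trial counts, channel counts, time points, and signal strength are assumptions for demonstration, not the parameters used in the study.

```python
# Sketch of time-resolved MEG decoding plus a simple RSA step.
# Synthetic data stand in for real MEG recordings; all shapes are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times, n_letters = 200, 50, 20, 10

# One letter label per trial, and a letter-specific sensor pattern
# injected into noise so that decoding is above chance.
labels = rng.integers(0, n_letters, size=n_trials)
letter_patterns = rng.standard_normal((n_letters, n_channels))
data = rng.standard_normal((n_trials, n_channels, n_times))
data += letter_patterns[labels][:, :, None] * 0.5

# Time-resolved decoding: cross-validated linear-SVM accuracy at each
# time point yields a classification time course, as in the abstract.
accuracy = np.array([
    cross_val_score(SVC(kernel="linear"), data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

# RSA step: a representational dissimilarity matrix (RDM) at one time
# point, from pairwise distances between mean sensor patterns per letter.
means = np.array([data[labels == k, :, 0].mean(axis=0)
                  for k in range(n_letters)])
rdm = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
# Comparing groups (e.g., blind vs. sighted) would then correlate the
# upper triangles of two such RDMs (Kriegeskorte et al., 2008).
```

In the actual study each group would contribute its own RDM time course, and the within- and between-group correlations reported in the abstract come from comparing those matrices over time.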